Improving Patch-Based Convolutional Neural Networks for MRI Brain Tumor Segmentation by Leveraging Location Information.
The manual brain tumor annotation process is time-consuming and resource-intensive; therefore, an automated and accurate brain tumor segmentation tool is in great demand. In this paper, we introduce a novel method to integrate location information with state-of-the-art patch-based neural networks for brain tumor segmentation. This is motivated by the observation that lesions are not uniformly distributed across different brain parcellation regions, so a locality-sensitive segmentation is likely to obtain better accuracy. Toward this, we use an existing brain parcellation atlas in the Montreal Neurological Institute (MNI) space and map this atlas to the individual subject data. This mapped atlas in the subject data space is integrated with structural Magnetic Resonance (MR) imaging data, and patch-based neural networks, including 3D U-Net and DeepMedic, are trained to classify the different brain lesions. Multiple state-of-the-art neural networks are trained and integrated with XGBoost fusion in the proposed two-level ensemble method. The first level reduces the uncertainty of the same type of models with different seed initializations, and the second level leverages the advantages of different types of neural network models. The proposed location information fusion method improves the segmentation performance of state-of-the-art networks including 3D U-Net and DeepMedic. Our proposed ensemble also achieves better segmentation performance compared to the state-of-the-art networks in BraTS 2017 and rivals state-of-the-art networks in BraTS 2018. Detailed results are provided on the public multimodal brain tumor segmentation (BraTS) benchmarks.
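The location-information fusion described above amounts to feeding the network a mapped parcellation atlas as an extra input channel alongside the MR modalities. The following is a minimal sketch of that channel stacking, with illustrative shapes and a hypothetical 90-region atlas; it is not code from the paper.

```python
import numpy as np

# Hypothetical setup: four MR modalities (e.g. T1, T1ce, T2, FLAIR) plus one
# mapped-atlas label volume, all co-registered to the same 3-D grid.
def stack_location_channel(mr_volumes, atlas_labels, num_regions):
    """Append a normalized parcellation-label channel to the MR channels."""
    atlas = atlas_labels.astype(np.float32) / float(num_regions)  # scale to [0, 1]
    channels = list(mr_volumes) + [atlas]
    return np.stack(channels, axis=0)  # (C+1, D, H, W) network input

mods = [np.zeros((8, 8, 8), dtype=np.float32) for _ in range(4)]
atlas = np.random.randint(0, 90, size=(8, 8, 8))
x = stack_location_channel(mods, atlas, num_regions=90)
print(x.shape)  # (5, 8, 8, 8)
```

A patch-based network such as 3D U-Net or DeepMedic can then consume the extra channel unchanged, letting it condition its predictions on where in the brain each patch lies.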
Car-following Behavior Analysis of Left-turn Vehicles at Signalized Intersections
To enrich the car-following theory of urban signalized intersections and reveal the characteristics of left-turn car-following, the car-following behavior of left-turning vehicles at signalized intersections was studied. A car-following data acquisition test based on high-precision GPS was designed, and the car-following characteristics of left-turning vehicles at signalized intersections with different turning radii were analyzed. On this basis, the influence of radius on car-following behavior was explained, and the New Full Velocity Difference (NFVD) model was developed. A genetic algorithm was used to calibrate the parameters of the NFVD model, and the stability and accuracy of the calibrated model were further analyzed using field data. The results showed that the average speed of the following car increases with the turning radius of the signalized intersection; the most frequently occurring car-following speed also tends to increase as the turning radius grows; and the larger the average headway distance between the car-following vehicles, the more intense the driver's response to the deceleration of the front vehicle. These findings could be used in traffic simulation and to inform engineering decisions.
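The NFVD model extends the classic Full Velocity Difference (FVD) family, in which the follower's acceleration balances an optimal-velocity mismatch against the speed difference with the leader. The sketch below shows the standard FVD form only; all parameter values are illustrative, not the genetically calibrated values from the paper.

```python
import math

# Classic FVD car-following model: a = kappa * (V(gap) - v) + lam * (v_lead - v),
# with a tanh-shaped optimal velocity function V. Parameters are illustrative.
def optimal_velocity(gap, v_max=15.0, c1=0.13, c2=1.57, length=5.0):
    return 0.5 * v_max * (math.tanh(c1 * (gap - length) - c2) + math.tanh(c2))

def fvd_acceleration(gap, v_follow, v_lead, kappa=0.4, lam=0.5):
    # kappa: sensitivity to the optimal-velocity mismatch
    # lam: sensitivity to the velocity difference with the leader
    return kappa * (optimal_velocity(gap) - v_follow) + lam * (v_lead - v_follow)

# One Euler integration step of the follower's state (dt in seconds)
def step(gap, v_follow, v_lead, dt=0.1):
    a = fvd_acceleration(gap, v_follow, v_lead)
    return gap + (v_lead - v_follow) * dt, v_follow + a * dt
```

A radius-aware variant like NFVD would make the optimal-velocity parameters depend on the intersection's turning radius, which is what the calibration against GPS field data targets.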
Solid-state ensemble of highly entangled photon sources at rubidium atomic transitions
Semiconductor InAs/GaAs quantum dots grown by the Stranski-Krastanov method
are among the leading candidates for the deterministic generation of
polarization entangled photon pairs. Despite remarkable progress in the last
twenty years, many challenges still remain for this material, such as the
extremely low yield (<1% of quantum dots can emit entangled photons), the low
degree of entanglement, and the large wavelength distribution. Here we show
that, with an emerging family of GaAs/AlGaAs quantum dots grown by droplet
etching and nanohole infilling, it is possible to obtain a large ensemble
(close to 100%) of polarization-entangled photon emitters on a wafer without
any post-growth tuning. Under pulsed resonant two-photon excitation, all
measured quantum dots emit single pairs of entangled photons with ultra-high
purity, high degree of entanglement (fidelity up to F=0.91, with a record high
concurrence C=0.90), and ultra-narrow wavelength distribution at rubidium
transitions. Therefore, a solid-state quantum repeater - among many other key
enabling quantum photonic elements - can be practically implemented with this
new material.
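The concurrence quoted above (C = 0.90) is the standard Wootters entanglement measure for a two-qubit state. The snippet below computes it from a density matrix using the textbook spin-flip construction; this is generic quantum-information code, not code from the paper.

```python
import numpy as np

# Wootters concurrence: C = max(0, l1 - l2 - l3 - l4), where l_i are the
# decreasingly sorted square roots of the eigenvalues of
# R = rho (sigma_y x sigma_y) rho* (sigma_y x sigma_y).
def concurrence(rho):
    sy = np.array([[0.0, -1j], [1j, 0.0]])
    yy = np.kron(sy, sy)                    # two-qubit spin-flip operator
    r = rho @ yy @ rho.conj() @ yy
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(r))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1.0 / np.sqrt(2.0)      # (|00> + |11>) / sqrt(2)
print(concurrence(np.outer(bell, bell.conj())))  # ~1.0 for a Bell state
```

A maximally entangled Bell state gives C = 1, a product state gives C = 0, and the C = 0.90 reported above indicates near-maximal entanglement of the emitted photon pairs.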
Coresets for Relational Data and The Applications
A coreset is a small set that can approximately preserve the structure of the
original input data set. Therefore we can run our algorithm on a coreset so as
to reduce the total computational complexity. Conventional coreset techniques
assume that the input data set is available to process explicitly. However,
this assumption may not hold in real-world scenarios. In this paper, we
consider the problem of coresets construction over relational data. Namely, the
data is decoupled into several relational tables, and it could be very
expensive to directly materialize the data matrix by joining the tables. We
propose a novel approach called ``aggregation tree with pseudo-cube'' that can
build a coreset bottom-up. Moreover, our approach can neatly circumvent
several troublesome issues of relational learning problems [Khamis et al., PODS
2019]. Under some mild assumptions, we show that our coreset approach can be
applied to machine learning tasks such as clustering, logistic regression,
and SVM.
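To make the coreset idea concrete, here is a generic importance-sampling sketch: a small weighted subset whose weighted sums approximate the full data set in expectation. The paper's actual aggregation-tree-with-pseudo-cube construction over relational tables is considerably more involved; all names below are illustrative.

```python
import random

def sample_coreset(points, m, seed=0):
    """Sample m weighted points; weights 1/(m * p_i) keep weighted sums unbiased."""
    rng = random.Random(seed)
    dim = len(points[0])
    mean = [sum(p[d] for p in points) / len(points) for d in range(dim)]
    # importance ~ squared distance to the mean (small floor for stability)
    imp = [sum((a - b) ** 2 for a, b in zip(p, mean)) + 1e-9 for p in points]
    total = sum(imp)
    probs = [w / total for w in imp]
    idx = rng.choices(range(len(points)), weights=probs, k=m)
    return [(points[i], 1.0 / (m * probs[i])) for i in idx]

pts = [(float(i), float(i % 3)) for i in range(50)]
core = sample_coreset(pts, 10)
```

The relational setting makes the sampling step itself hard: materializing `points` by joining the tables can be exponentially expensive, which is exactly what the aggregation-tree approach avoids.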
One-domain-one-input: adaptive random testing by orthogonal recursive bisection with restriction
One goal of software testing may be the identification
or generation of a series of test cases that can detect a fault with as few test executions as possible. Motivated by insights from research into failure-causing regions of input domains, the even spreading (even distribution) of tests across the input domain has been identified as a useful heuristic to find failures more quickly. This finding has encouraged a shift in focus from traditional random testing (RT) to its enhancement, adaptive random testing (ART), which retains the randomness of test input selection but also attempts to maintain a more evenly distributed spread of test inputs across the input domain. Given that there are different ways to achieve this even distribution, several different ART methods and approaches have been proposed. This paper presents a new ART method, called ART-ORB, which explores the advantages of repeated geometric bisection of the input domain, combined with restriction regions, to evenly spread test inputs. Experimental results show better performance, in terms of fewer test executions than RT, at finding failures. Compared with other ART methods, ART-ORB has comparable performance (in terms of required test executions) but incurs lower test input selection overheads, especially in higher-dimensional input spaces. It is recommended that ART-ORB be used in testing situations involving expensive test input execution.
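The bisection-plus-restriction idea can be illustrated in one dimension: each round halves every cell, and a random test input is drawn in each empty cell while rejecting candidates too close to earlier tests. This is a simplified sketch of the flavor of ART-ORB, with illustrative parameter names, not the paper's algorithm.

```python
import random

def art_bisection_tests(rounds, restriction=0.1, seed=0):
    """Spread test inputs over [0, 1) by recursive bisection with restriction zones."""
    rng = random.Random(seed)
    tests = []
    cells = [(0.0, 1.0)]
    for _ in range(rounds):
        # orthogonal recursive bisection step: halve every current cell
        cells = [half for lo, hi in cells
                 for half in ((lo, (lo + hi) / 2.0), ((lo + hi) / 2.0, hi))]
        for lo, hi in cells:
            if any(lo <= t < hi for t in tests):
                continue  # cell already contains a test input
            width = hi - lo
            for _ in range(10):  # bounded retries under the restriction
                cand = rng.uniform(lo, hi)
                if all(abs(cand - t) >= restriction * width for t in tests):
                    tests.append(cand)
                    break
    return tests
```

Because each new input only needs a containment check per cell and a distance check against existing tests, the selection overhead stays low compared with distance-maximizing ART variants, which is the advantage claimed for ART-ORB above.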
Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement
Neural Radiance Fields (NeRF) have constituted a remarkable breakthrough in
image-based 3D reconstruction. However, their implicit volumetric
representations differ significantly from the widely-adopted polygonal meshes
and lack support from common 3D software and hardware, making their rendering
and manipulation inefficient. To overcome this limitation, we present a novel
framework that generates textured surface meshes from images. Our approach
begins by efficiently initializing the geometry and view-dependency decomposed
appearance with a NeRF. Subsequently, a coarse mesh is extracted, and an
iterative surface refining algorithm is developed to adaptively adjust both
vertex positions and face density based on re-projected rendering errors. We
jointly refine the appearance with geometry and bake it into texture images for
real-time rendering. Extensive experiments demonstrate that our method achieves
superior mesh quality and competitive rendering quality. Comment: ICCV 2023 camera-ready. Project Page: https://me.kiui.moe/nerf2mes
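The adaptive face-density adjustment described above can be illustrated with a toy error-driven subdivision pass: faces whose re-projected rendering error exceeds a threshold are split at edge midpoints (1 face becomes 4). This is a minimal sketch of the idea only; the error source and all names are illustrative, and the paper's refinement also moves vertex positions and decimates over-dense regions.

```python
def refine(vertices, faces, face_errors, threshold):
    """Subdivide each triangle whose error exceeds the threshold."""
    vertices = list(vertices)
    new_faces = []
    midpoint_cache = {}  # shared midpoints so adjacent splits reuse vertices

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint_cache:
            vi, vj = vertices[i], vertices[j]
            vertices.append(tuple((a + b) / 2.0 for a, b in zip(vi, vj)))
            midpoint_cache[key] = len(vertices) - 1
        return midpoint_cache[key]

    for (a, b, c), err in zip(faces, face_errors):
        if err > threshold:
            ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
            new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
        else:
            new_faces.append((a, b, c))
    return vertices, new_faces
```

Iterating this pass concentrates mesh resolution exactly where the rendered mesh disagrees with the input images, while low-error regions keep their coarse triangles.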
Efficient Test-Time Model Adaptation without Forgetting
Test-time adaptation (TTA) seeks to tackle potential distribution shifts
between training and testing data by adapting a given model w.r.t. any testing
sample. This task is particularly important for deep models when the test
environment changes frequently. Although some recent attempts have been made to
handle this task, we still face two practical challenges: 1) existing methods
have to perform backward computation for each test sample, resulting in
unbearable prediction cost for many applications; 2) while existing TTA
solutions can significantly improve the test performance on out-of-distribution
data, they often suffer from severe performance degradation on in-distribution
data after TTA (known as catastrophic forgetting). In this paper, we point out
that not all the test samples contribute equally to model adaptation, and
high-entropy ones may lead to noisy gradients that could disrupt the model.
Motivated by this, we propose an active sample selection criterion to identify
reliable and non-redundant samples, on which the model is updated to minimize
the entropy loss for test-time adaptation. Furthermore, to alleviate the
forgetting issue, we introduce a Fisher regularizer to constrain important
model parameters from drastic changes, where the Fisher importance is estimated
from test samples with generated pseudo labels. Extensive experiments on
CIFAR-10-C, ImageNet-C, and ImageNet-R verify the effectiveness of our proposed
method. Comment: 15 pages, conference
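The reliability criterion described above can be sketched as an entropy filter: only test samples whose predictive entropy falls below a margin trigger an adaptation step, so high-entropy samples never contribute noisy gradients. The margin value below is illustrative, not the paper's setting.

```python
import math

def entropy(probs):
    """Shannon entropy of a predictive distribution (natural log)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_reliable(prob_batch, margin):
    """Indices of samples confident enough (low entropy) to adapt on."""
    return [i for i, probs in enumerate(prob_batch) if entropy(probs) < margin]

batch = [[0.98, 0.01, 0.01],        # confident prediction -> reliable
         [1 / 3, 1 / 3, 1 / 3]]     # uniform prediction -> filtered out
```

The full method additionally drops redundant samples among the reliable ones and adds a Fisher regularizer on important parameters to curb forgetting; the filter above is only the first of those pieces.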